Multi-Penalty Regularization with a Component-Wise Penalization

Authors

  • Valeriya Naumova
  • Sergei V Pereverzyev
Abstract

We discuss a new regularization scheme for reconstructing the solution of a linear ill-posed operator equation from given noisy data in the Hilbert space setting. In this scheme, the regularized approximation is decomposed into several components, which are defined by minimizing a multi-penalty functional. We show theoretically and numerically that, under a proper choice of the regularization parameters, the regularized approximation exhibits the so-called compensatory property: it performs similarly to the best single-penalty regularization with the same penalizing operator.
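As a rough numerical illustration of the idea (not the authors' construction — the forward operator, the identity penalties, and the parameter values below are all toy assumptions), the regularized approximation can be computed as the sum of two components minimizing a two-penalty Tikhonov functional:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward problem (illustrative only): noisy data y = A x_true + noise.
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(n)

def multi_penalty(A, y, alpha, beta):
    """Minimize ||A(u + v) - y||^2 + alpha*||u||^2 + beta*||v||^2 over the
    component pair (u, v), written as one stacked least-squares problem."""
    n = A.shape[1]
    top = np.hstack([A, A])  # the data-fit term acts on the sum u + v
    reg = np.block([[np.sqrt(alpha) * np.eye(n), np.zeros((n, n))],
                    [np.zeros((n, n)), np.sqrt(beta) * np.eye(n)]])
    rhs = np.concatenate([y, np.zeros(2 * n)])
    uv, *_ = np.linalg.lstsq(np.vstack([top, reg]), rhs, rcond=None)
    return uv[:n], uv[n:]

u, v = multi_penalty(A, y, alpha=1e-2, beta=1e-1)
x_reg = u + v  # the regularized approximation is the sum of the components
```

How to balance `alpha` against `beta` so that the better-penalized component dominates is precisely the parameter-choice question the paper addresses.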


Related articles

On the balancing principle for some problems of Numerical Analysis

We discuss the choice of weight in penalization methods. The motivation for using penalization in Computational Mathematics is to improve the conditioning of the numerical solution. One example of such an improvement is regularization, where penalization replaces an ill-posed problem with a well-posed one. In modern numerical methods for PDEs, penalization is used, for example, to enforce a con...


Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices

Inspired by several recent developments in regularization theory, optimization, and signal processing, we present and analyze a numerical approach to multipenalty regularization in spaces of sparsely represented functions. The sparsity prior is motivated by the largely expected geometrical/structured features of high-dimensional data, which may not be well-represented in the framework of typica...


Inverse Problems with Second-order Total Generalized Variation Constraints

Total Generalized Variation (TGV) has recently been introduced as a penalty functional for modelling images with edges as well as smooth variations [2]. It can be interpreted as a "sparse" penalization of optimal balancing from the first up to the k-th distributional derivative, and it leads to desirable results when applied to image denoising, i.e., L²-fitting with a TGV penalty. The present paper studi...


Convergence Analysis of Multilayer Feedforward Networks Trained with Penalty Terms: a Review

The gradient descent method is one of the most popular methods for training feedforward neural networks. Batch and incremental modes are the two most common ways to implement gradient-based training for such networks in practice. Furthermore, since generalization is an important property and quality criterion of a trained network, pruning algorithms with the addition of regularization terms have been...
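As a generic illustration of batch gradient descent with an added L2 penalty term (weight decay) — a toy sketch with assumed data and hyperparameters, not a specific method from the survey:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data (illustrative only).
X = rng.standard_normal((100, 3))
t = np.tanh(X @ np.array([1.0, -2.0, 0.5]))

# One-hidden-layer network trained in batch mode; the L2 penalty term
# lam * ||W||^2 adds the weight itself to each gradient (weight decay).
W1 = 0.1 * rng.standard_normal((3, 8))
W2 = 0.1 * rng.standard_normal((8, 1))
lam, lr = 1e-3, 0.1

for _ in range(200):
    H = np.tanh(X @ W1)              # forward pass
    out = H @ W2
    e = out - t[:, None]
    gW2 = H.T @ e / len(X)           # gradients of the squared error ...
    gH = e @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    W2 -= lr * (gW2 + lam * W2)      # ... plus the penalty-term gradients
    W1 -= lr * (gW1 + lam * W1)

mse = float(np.mean((np.tanh(X @ W1) @ W2 - t[:, None]) ** 2))
```

The penalty keeps the weight norms bounded during training, which is the mechanism the reviewed convergence results rely on.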


Manifold learning via Multi-Penalty Regularization

Manifold regularization is an approach which exploits the geometry of the marginal distribution. The main goal of this paper is to analyze the convergence of such regularization algorithms in learning theory. We propose a more general multi-penalty framework and establish optimal convergence rates under a general smoothness assumption. We provide a theoretical analysis of the perform...



Journal:

Volume   Issue

Pages  -

Publication date: 2013